Convergence Properties of a Learning Algorithm

Authors
Abstract


Similar articles

Convergence Properties of a Gradual Learning Algorithm for Harmonic Grammar

This paper investigates a gradual on-line learning algorithm for Harmonic Grammar. By adapting existing convergence proofs for perceptrons, we show that for any nonvarying target language, Harmonic-Grammar learners are guaranteed to converge to an appropriate grammar, if they receive complete information about the structure of the learning data. We also prove convergence when the learner incorp...
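
As a rough illustration of the kind of update such a learner performs, here is a minimal perceptron-style Harmonic Grammar weight adjustment in Python. The function name, the error-driven trigger, and the fixed learning rate are assumptions made for this sketch; it is not the paper's exact algorithm.

```python
import numpy as np

def hg_update(weights, winner_violations, loser_violations, rate=0.1):
    """Perceptron-style Harmonic Grammar update (sketch).

    Harmony of a candidate is the negative weighted sum of its constraint
    violations. If the current weights fail to prefer the observed winner,
    nudge the weights by the violation difference so the winner improves.
    (Weights are left unconstrained in this sketch.)
    """
    winner_harmony = -np.dot(weights, winner_violations)
    loser_harmony = -np.dot(weights, loser_violations)
    if loser_harmony >= winner_harmony:   # learner's error: loser not beaten
        weights = weights + rate * (loser_violations - winner_violations)
    return weights

# toy example: two constraints, winner violates C2 once, loser violates C1 twice
w = np.zeros(2)
w = hg_update(w, winner_violations=np.array([0.0, 1.0]),
                 loser_violations=np.array([2.0, 0.0]))
print(w)   # weights move to favor the observed winner
```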


Convergence Properties and Stationary Points of a Perceptron Learning Algorithm

The Perceptron is an adaptive linear combiner that has its output quantized to one of two possible discrete values, and it is the basic component of multilayer, feedforward neural networks. The least-mean-square (LMS) adaptive algorithm adjusts the internal weights to train the network to perform some desired function, such as pattern recognition. In this paper, we present an analysis of the...
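
A minimal sketch of the setup this abstract describes, assuming a single perceptron whose linear-combiner output is quantized to ±1 and trained with an LMS-style correction; the function name, step size, and toy data are invented for illustration.

```python
import numpy as np

def train_perceptron_lms(X, d, mu=0.05, epochs=20, rng=None):
    """Single perceptron trained with an LMS-style rule (sketch).

    X : (n_samples, n_features) inputs
    d : (n_samples,) desired outputs in {-1, +1}
    The linear-combiner output w.x is quantized to +/-1, and the weights
    move along the input scaled by the quantized output error.
    """
    rng = rng or np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, d):
            y = 1.0 if w @ x >= 0 else -1.0   # quantized output
            w += mu * (target - y) * x        # LMS-style correction
    return w

# toy linearly separable data: label is the sign of the first feature
X = np.array([[1.0, 0.2], [0.8, -0.5], [-1.0, 0.3], [-0.7, -0.6]])
d = np.array([1.0, 1.0, -1.0, -1.0])
print(train_perceptron_lms(X, d))
```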


Convergence Properties of Symmetric Learning Algorithm for Pattern Classification

In the field of adaptive filters, the affine projection algorithm (APA) [1] is well known as a generalization of the normalized LMS algorithm [2], [3] to block signal processing. We proposed the geometric learning algorithm (GLA) as an application of the APA to the perceptron [4], [5]. The connection weight vector is updated vertically towards the orthogonal complement of k pattern vecto...
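
For context, here is a minimal sketch of a single affine projection (APA) step of the kind the abstract builds on. This is the generic APA update, not the GLA variant described in the paper, and the regularization constant, step size, and toy data are assumptions.

```python
import numpy as np

def apa_update(w, X_block, d_block, mu=0.5, eps=1e-6):
    """One affine projection algorithm (APA) step (sketch).

    X_block : (k, n) matrix whose rows are the k most recent input vectors
    d_block : (k,) corresponding desired outputs
    The correction lies in the span of the k inputs and drives the block
    error e = d - X w toward zero (eps regularizes the matrix inverse).
    """
    e = d_block - X_block @ w
    gain = X_block.T @ np.linalg.solve(
        X_block @ X_block.T + eps * np.eye(len(d_block)), e)
    return w + mu * gain

# toy usage: k = 2 recent patterns, 3 weights
w = np.zeros(3)
X = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, -1.0]])
d = np.array([1.0, -1.0])
print(apa_update(w, X, d))
```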


Parameter Optimization Algorithm with Improved Convergence Properties for Adaptive Learning

The error in an artificial neural network is a function of adaptive parameters (weights and biases) that needs to be minimized. Research on adaptive learning usually focuses on gradient algorithms that employ problem-dependent heuristic learning parameters. This fact usually results in a trade-off between the convergence speed and the stability of the learning algorithm. The paper investigates ...
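
The speed/stability trade-off mentioned above can be illustrated with a generic adaptive step-size heuristic. The following "bold driver" sketch is not the algorithm proposed in the paper; all names, constants, and the toy error surface are assumptions.

```python
import numpy as np

def gd_bold_driver(f, grad, x0, rate=0.1, steps=100, up=1.05, down=0.5):
    """Gradient descent with a simple adaptive step size (sketch).

    Grows the learning rate while the error keeps falling and shrinks it
    after an overshoot, instead of relying on one hand-tuned constant.
    """
    x = np.asarray(x0, dtype=float)
    best = f(x)
    for _ in range(steps):
        trial = x - rate * grad(x)
        err = f(trial)
        if err <= best:            # progress: accept the step and speed up
            x, best, rate = trial, err, rate * up
        else:                      # overshoot: reject the step and slow down
            rate *= down
    return x

# toy error surface E(x) = x1^2 + 10*x2^2 (minimum at the origin)
f = lambda x: x[0] ** 2 + 10.0 * x[1] ** 2
grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])
print(gd_bold_driver(f, grad, x0=[3.0, 1.0]))
```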


On convergence properties of pocket algorithm

The problem of finding optimal weights for a single threshold neuron starting from a general training set is considered. Among the variety of possible learning techniques, the pocket algorithm has a proper convergence theorem which asserts its optimality. However, the original proof ensures the asymptotic achievement of an optimal weight vector only if the inputs in the training set are integer...
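
A simplified, ratchet-style sketch of the pocket idea in Python: plain perceptron updates run on randomly drawn samples while the best weight vector seen so far is kept aside. Gallant's original bookkeeping (runs of consecutive correct classifications) is more involved, and the data and constants here are invented for illustration.

```python
import numpy as np

def pocket_algorithm(X, y, epochs=50, rng=None):
    """Simplified pocket algorithm for a single threshold neuron (sketch).

    Performs ordinary perceptron updates on randomly drawn samples, but
    keeps 'in the pocket' the weight vector that classifies the most
    training points correctly, so a usable solution survives even when
    the data are not linearly separable.
    """
    rng = rng or np.random.default_rng(0)
    n, dim = X.shape
    w = np.zeros(dim)
    pocket_w, pocket_correct = w.copy(), 0
    for _ in range(epochs * n):
        correct = int(np.sum(y * (X @ w) > 0))
        if correct > pocket_correct:          # ratchet: keep the best so far
            pocket_w, pocket_correct = w.copy(), correct
        i = rng.integers(n)
        if y[i] * (w @ X[i]) <= 0:            # misclassified: perceptron step
            w = w + y[i] * X[i]
    return pocket_w

# toy, non-separable data (one label deliberately inconsistent)
X = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, 1.0])
print(pocket_algorithm(X, y))
```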



Journal

Journal title: The Annals of Mathematical Statistics

Year: 1964

ISSN: 0003-4851

DOI: 10.1214/aoms/1177700406